In mathematics, an inner product space is a vector space with an additional structure called an inner product. This additional structure associates each pair of vectors in the space with a scalar quantity known as the inner product of the vectors. Inner products allow the rigorous introduction of intuitive geometrical notions such as the length of a vector or the angle between two vectors. They also provide the means of defining orthogonality between vectors (zero inner product). Inner product spaces generalize Euclidean spaces (in which the inner product is the dot product, also known as the scalar product) to vector spaces of any (possibly infinite) dimension, and are studied in functional analysis.
An inner product space is sometimes also called a pre-Hilbert space, since its completion with respect to the metric induced by its inner product is a Hilbert space. That is, if a pre-Hilbert space is complete with respect to the metric arising from its inner product (and the norm it induces), then it is called a Hilbert space.
Inner product spaces were referred to as unitary spaces in earlier work, although this terminology is now rarely used.
In this article, the field of scalars denoted $\mathbb{F}$ is either the field of real numbers $\mathbb{R}$ or the field of complex numbers $\mathbb{C}$.

Formally, an inner product space is a vector space $V$ over the field $\mathbb{F}$ together with an inner product, i.e., with a map

$$\langle \cdot, \cdot \rangle : V \times V \to \mathbb{F}$$

that satisfies the following three axioms for all vectors $x, y, z \in V$ and all scalars $a \in \mathbb{F}$:[1][2]

Conjugate symmetry: $\langle x, y \rangle = \overline{\langle y, x \rangle}$.

Linearity in the first argument: $\langle ax, y \rangle = a \langle x, y \rangle$ and $\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$.

Positive-definiteness: $\langle x, x \rangle \geq 0$, with equality if and only if $x = 0$.
Notice that conjugate symmetry implies that $\langle x, x \rangle$ is real for all $x$, since we have $\langle x, x \rangle = \overline{\langle x, x \rangle}$.

Conjugate symmetry and linearity in the first variable give

$$\langle x, ay \rangle = \overline{\langle ay, x \rangle} = \bar{a}\,\overline{\langle y, x \rangle} = \bar{a} \langle x, y \rangle,$$
$$\langle x, y + z \rangle = \overline{\langle y + z, x \rangle} = \overline{\langle y, x \rangle} + \overline{\langle z, x \rangle} = \langle x, y \rangle + \langle x, z \rangle,$$
so an inner product is a sesquilinear form. Conjugate symmetry is also called Hermitian symmetry, and a conjugate-symmetric sesquilinear form is called a Hermitian form. While the above axioms are more mathematically economical, a compact verbal definition of an inner product is a positive-definite Hermitian form.
In the case of $\mathbb{F} = \mathbb{R}$, conjugate-symmetric reduces to symmetric, and sesquilinear reduces to bilinear. So, an inner product on a real vector space is a positive-definite symmetric bilinear form.
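As a concrete illustration (a minimal sketch in Python/NumPy; the helper name `inner` is ours and assumes the standard inner product on $\mathbb{C}^n$, linear in the first argument), the three axioms can be checked numerically:

```python
import numpy as np

def inner(x, y):
    # Standard inner product on C^n, linear in the first argument and
    # conjugate-linear in the second: <x, y> = sum_i x_i * conj(y_i).
    return np.sum(x * np.conj(y))

rng = np.random.default_rng(0)
x, y, z = (rng.normal(size=4) + 1j * rng.normal(size=4) for _ in range(3))
a = 2.0 - 1.5j

# Conjugate symmetry: <x, y> = conj(<y, x>)
assert np.isclose(inner(x, y), np.conj(inner(y, x)))
# Linearity in the first argument
assert np.isclose(inner(a * x + z, y), a * inner(x, y) + inner(z, y))
# Positive-definiteness: <x, x> is (numerically) real and positive for x != 0
assert abs(inner(x, x).imag) < 1e-12 and inner(x, x).real > 0
```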
From the linearity property it is derived that $x = 0$ implies $\langle x, x \rangle = 0$, while from the positive-definiteness axiom we obtain the converse, $\langle x, x \rangle = 0$ implies $x = 0$. Combining these two, we have the property that $\langle x, x \rangle = 0$ if and only if $x = 0$.

The property of an inner product space $V$ that

$$\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle \quad \text{and} \quad \langle x, y + z \rangle = \langle x, y \rangle + \langle x, z \rangle$$

is known as additivity.
Remark: Some authors, especially in physics and matrix algebra, prefer to define the inner product and the sesquilinear form with linearity in the second argument rather than the first. Then the first argument becomes conjugate linear, rather than the second. In those disciplines we would write the product $\langle x, y \rangle$ as $\langle y \mid x \rangle$ (the bra-ket notation of quantum mechanics), respectively $y^\dagger x$ (dot product as a case of the convention of forming the matrix product $AB$ as the dot products of rows of $A$ with columns of $B$). Here the kets and columns are identified with the vectors of $V$ and the bras and rows with the dual vectors or linear functionals of the dual space $V^*$, with conjugacy associated with duality. This reverse order is now occasionally followed in the more abstract literature, e.g., Emch [1972], taking $\langle x, y \rangle$ to be conjugate linear in $x$ rather than $y$. A few instead find a middle ground by recognizing both $\langle \cdot, \cdot \rangle$ and $\langle \cdot \mid \cdot \rangle$ as distinct notations differing only in which argument is conjugate linear.
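For instance (a small Python/NumPy sketch, not part of the formal definition), NumPy's np.vdot follows the second convention: it conjugates its first argument, so it is conjugate linear in the first argument and linear in the second:

```python
import numpy as np

x = np.array([1 + 2j, 3 - 1j])
y = np.array([2 - 1j, 1 + 4j])
a = 1 - 3j

# np.vdot(x, y) = sum(conj(x) * y): conjugate linear in x, linear in y
assert np.isclose(np.vdot(a * x, y), np.conj(a) * np.vdot(x, y))
assert np.isclose(np.vdot(x, a * y), a * np.vdot(x, y))
```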
There are various technical reasons why it is necessary to restrict the basefield to $\mathbb{R}$ and $\mathbb{C}$ in the definition. Briefly, the basefield has to contain an ordered subfield (in order for non-negativity to make sense) and therefore has to have characteristic equal to 0. This immediately excludes finite fields. The basefield has to have additional structure, such as a distinguished automorphism. More generally, any quadratically closed subfield of $\mathbb{R}$ or $\mathbb{C}$ will suffice for this purpose, e.g., the algebraic numbers, but when it is a proper subfield (i.e., neither $\mathbb{R}$ nor $\mathbb{C}$) even finite-dimensional inner product spaces will fail to be metrically complete. In contrast, all finite-dimensional inner product spaces over $\mathbb{R}$ or $\mathbb{C}$, such as those used in quantum computation, are automatically metrically complete and hence Hilbert spaces.
In some cases we need to consider non-negative semi-definite sesquilinear forms. This means that $\langle x, x \rangle$ is only required to be non-negative. We show how to treat these below.
A linear space with a norm such as

$$\|x\|_p = \left( \sum_i |x_i|^p \right)^{1/p}, \qquad p \neq 2,$$

is a normed space but not an inner product space, because this norm does not satisfy the parallelogram equality required of a norm to have an inner product associated with it.[3][4]
However, inner product spaces have a naturally defined norm based upon the inner product of the space itself that does satisfy the parallelogram equality:

$$\|x\| = \sqrt{\langle x, x \rangle}.$$

This is well defined by the nonnegativity axiom of the definition of inner product space. The norm is thought of as the length of the vector $x$. Directly from the axioms, we can prove, in particular, the parallelogram equality

$$\|x + y\|^2 + \|x - y\|^2 = 2\|x\|^2 + 2\|y\|^2.$$
The parallelogram law is, in fact, a necessary and sufficient condition for the existence of a scalar product corresponding to a given norm. If it holds, the scalar product is defined by the polarization identity: in the real case,

$$\langle x, y \rangle = \frac{\|x + y\|^2 - \|x - y\|^2}{4},$$

and in the complex case (with the above convention of linearity in the first argument),

$$\langle x, y \rangle = \frac{\|x + y\|^2 - \|x - y\|^2}{4} + i\,\frac{\|x + iy\|^2 - \|x - iy\|^2}{4}.$$
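The following Python/NumPy sketch (helper names are ours) illustrates both points: the Euclidean norm satisfies the parallelogram equality while the 1-norm does not, and the real polarization identity recovers the dot product from the Euclidean norm:

```python
import numpy as np

def parallelogram_defect(x, y, p):
    # ||x + y||^2 + ||x - y||^2 - 2||x||^2 - 2||y||^2; zero iff the law holds
    n = lambda v: np.linalg.norm(v, ord=p)
    return n(x + y)**2 + n(x - y)**2 - 2 * n(x)**2 - 2 * n(y)**2

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
assert np.isclose(parallelogram_defect(x, y, 2), 0.0)      # Euclidean norm: law holds
assert not np.isclose(parallelogram_defect(x, y, 1), 0.0)  # p = 1: law fails

def polarization(x, y):
    # Real polarization identity: <x, y> = (||x + y||^2 - ||x - y||^2) / 4
    n = np.linalg.norm
    return (n(x + y)**2 - n(x - y)**2) / 4

u = np.array([1.0, 2.0, -1.0])
v = np.array([0.5, -3.0, 2.0])
assert np.isclose(polarization(u, v), np.dot(u, v))
```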
Let $V$ be a finite-dimensional inner product space of dimension $n$. Recall that every basis of $V$ consists of exactly $n$ linearly independent vectors. Using the Gram-Schmidt process we may start with an arbitrary basis and transform it into an orthonormal basis, that is, into a basis in which all the elements are orthogonal and have unit norm. In symbols, a basis $\{e_1, \ldots, e_n\}$ is orthonormal if $\langle e_i, e_j \rangle = 0$ if $i \neq j$ and $\langle e_i, e_i \rangle = 1$ for each $i$.
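A minimal Python/NumPy sketch of the Gram-Schmidt process (the function name gram_schmidt is ours; it assumes the input vectors are linearly independent):

```python
import numpy as np

def gram_schmidt(vectors):
    # Classical Gram-Schmidt: subtract projections onto the vectors found so far,
    # then normalize. Uses np.vdot, which conjugates its first argument.
    ortho = []
    for v in vectors:
        w = v.astype(complex)
        for e in ortho:
            w = w - np.vdot(e, w) * e   # remove the component along e
        ortho.append(w / np.linalg.norm(w))
    return ortho

basis = [np.array([1.0, 1.0, 0.0]),
         np.array([1.0, 0.0, 1.0]),
         np.array([0.0, 1.0, 1.0])]
E = gram_schmidt(basis)

# Orthonormality: <e_i, e_j> = 0 for i != j and <e_i, e_i> = 1
G = np.array([[np.vdot(ei, ej) for ej in E] for ei in E])
assert np.allclose(G, np.eye(3))
```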
This definition of orthonormal basis generalizes to the case of infinite-dimensional inner product spaces in the following way. Let $V$ be any inner product space. Then a collection $E = \{e_\alpha\}_{\alpha \in A}$ is a basis for $V$ if the subspace of $V$ generated by finite linear combinations of elements of $E$ is dense in $V$ (in the norm induced by the inner product). We say that $E$ is an orthonormal basis for $V$ if it is a basis and $\langle e_\alpha, e_\beta \rangle = 0$ if $\alpha \neq \beta$ and $\langle e_\alpha, e_\alpha \rangle = 1$ for all $\alpha, \beta \in A$.
Using an infinite-dimensional analog of the Gram-Schmidt process one may show:
Theorem. Any separable inner product space V has an orthonormal basis.
Using the Hausdorff Maximal Principle and the fact that in a complete inner product space orthogonal projection onto linear subspaces is well-defined, one may also show that
Theorem. Any complete inner product space V has an orthonormal basis.
The two previous theorems raise the question of whether all inner product spaces have an orthonormal basis. The answer, it turns out, is negative. This is a non-trivial result, and is proved below. The following proof is taken from Halmos's A Hilbert Space Problem Book (see the references).
Proof. Recall that the dimension of an inner product space is the cardinality of a maximal orthonormal system that it contains. Let $K$ be a Hilbert space (a complete inner product space). Our first task is to construct a Hilbert space $H$ with a dense subspace $G$ so that the dimension of $G$ is strictly smaller than the dimension of $H$; this is done by taking a Hilbert space $L$ of dimension $c$ and defining a suitable linear transformation between them. Having constructed a Hilbert space $H$ with a dense subspace $G$ of strictly smaller dimension, one then shows that there does not exist an orthonormal basis for $G$: supposing, for the sake of contradiction, that such a basis exists, one derives a contradiction.
Parseval's identity leads immediately to the following theorem:
Theorem. Let $V$ be a separable inner product space and $\{e_k\}_k$ an orthonormal basis of $V$. Then the map

$$x \mapsto \{\langle x, e_k \rangle\}_{k \in \mathbb{N}}$$

is an isometric linear map $V \to \ell^2$ with a dense image.
This theorem can be regarded as an abstract form of Fourier series, in which an arbitrary orthonormal basis plays the role of the sequence of trigonometric polynomials. Note that the underlying index set can be taken to be any countable set (and in fact any set whatsoever, provided $\ell^2$ is defined appropriately, as is explained in the article Hilbert space). In particular, we obtain the following result in the theory of Fourier series:
Theorem. Let $V$ be the inner product space $C[-\pi, \pi]$ of continuous complex-valued functions on $[-\pi, \pi]$, with the $L^2$ inner product

$$\langle f, g \rangle = \int_{-\pi}^{\pi} f(t)\, \overline{g(t)}\, dt.$$

Then the sequence (indexed on the set of all integers) of continuous functions

$$e_k(t) = \frac{e^{ikt}}{\sqrt{2\pi}}$$

is an orthonormal basis of the space $C[-\pi, \pi]$ with the $L^2$ inner product. The mapping

$$x \mapsto \left\{ \frac{1}{\sqrt{2\pi}} \int_{-\pi}^{\pi} x(t)\, e^{-ikt}\, dt \right\}_{k \in \mathbb{Z}}$$

is an isometric linear map with dense image.
Orthogonality of the sequence $\{e_k\}_k$ follows immediately from the fact that if $k \neq j$, then

$$\int_{-\pi}^{\pi} e^{i(k - j)t}\, dt = 0.$$

Normality of the sequence is by design; that is, the coefficients are chosen so that the norm comes out to 1. Finally, the fact that the sequence has a dense algebraic span in the inner product norm follows from the fact that the sequence has a dense algebraic span, this time in the space of continuous periodic functions on $[-\pi, \pi]$ with the uniform norm. This is the content of the Weierstrass theorem on the uniform density of trigonometric polynomials.
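A numerical sketch of these facts in Python/NumPy (the grid, quadrature, and helper names are our own; a plain Riemann sum over a full period is very accurate for these smooth periodic integrands):

```python
import numpy as np

N = 4096
t = np.linspace(-np.pi, np.pi, N, endpoint=False)
dt = 2 * np.pi / N

def l2_inner(f_vals, g_vals):
    # Discretized L2 inner product: integral of f * conj(g) over [-pi, pi]
    return np.sum(f_vals * np.conj(g_vals)) * dt

def e(k):
    return np.exp(1j * k * t) / np.sqrt(2 * np.pi)

# Orthonormality: <e_k, e_j> is approximately 1 if k == j and 0 otherwise
for k in range(-3, 4):
    for j in range(-3, 4):
        expected = 1.0 if k == j else 0.0
        assert abs(l2_inner(e(k), e(j)) - expected) < 1e-9

# Fourier coefficients of a sample trigonometric polynomial, and a Parseval check
f = np.cos(2 * t) + 0.5 * np.sin(t)
coeffs = np.array([l2_inner(f, e(k)) for k in range(-10, 11)])
assert np.isclose(np.sum(np.abs(coeffs)**2), l2_inner(f, f).real)
```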
Several types of linear maps $A$ from an inner product space $V$ to an inner product space $W$ are of relevance: continuous linear maps, i.e., $A$ is linear and continuous with respect to the metric defined above; symmetric linear operators, i.e., $A$ is linear and $\langle Ax, y \rangle = \langle x, Ay \rangle$ for all $x, y$; isometries, i.e., $A$ is linear and $\langle Ax, Ay \rangle = \langle x, y \rangle$ for all $x, y$, or equivalently, $\|Ax\| = \|x\|$ for all $x$; and isometric isomorphisms, i.e., surjective isometries, also known as unitary operators.
From the point of view of inner product space theory, there is no need to distinguish between two spaces which are isometrically isomorphic. The spectral theorem provides a canonical form for symmetric, unitary and more generally normal operators on finite dimensional inner product spaces. A generalization of the spectral theorem holds for continuous normal operators in Hilbert spaces.
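A finite-dimensional Python/NumPy sketch (the example data are ours): a unitary matrix preserves the inner product, and the spectral theorem for a Hermitian operator is realized numerically by np.linalg.eigh:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random unitary matrix from the QR factorization of a random complex matrix
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)
# Isometry / unitarity: <Qx, Qy> = <x, y>
assert np.isclose(np.vdot(Q @ x, Q @ y), np.vdot(x, y))

# Spectral theorem for a Hermitian operator: A = U diag(w) U^* with real eigenvalues
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (M + M.conj().T) / 2
w, U = np.linalg.eigh(A)
assert np.allclose(A, U @ np.diag(w) @ U.conj().T)
assert np.isrealobj(w)
```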
Any of the axioms of an inner product may be weakened, yielding generalized notions. The generalizations that are closest to inner products occur where bilinearity and conjugate symmetry are retained, but positive-definiteness is weakened.
If $V$ is a vector space and $\langle \cdot, \cdot \rangle$ a semi-definite sesquilinear form, then the function $\|x\| = \sqrt{\langle x, x \rangle}$ makes sense and satisfies all the properties of a norm except that $\|x\| = 0$ does not imply $x = 0$ (such a functional is then called a semi-norm). We can produce an inner product space by considering the quotient $W = V / \{x : \|x\| = 0\}$. The sesquilinear form $\langle \cdot, \cdot \rangle$ factors through $W$.
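As a toy illustration of this quotient construction (a Python sketch with our own example form, not taken from the text), consider the positive semi-definite form on $\mathbb{R}^3$ that ignores the third coordinate; its null vectors form the $x_3$-axis, and passing to the quotient amounts to dropping that coordinate:

```python
import numpy as np

def semi_form(x, y):
    # Positive semi-definite bilinear form on R^3 ignoring the third coordinate
    return x[0] * y[0] + x[1] * y[1]

def seminorm(x):
    return np.sqrt(semi_form(x, x))

x = np.array([0.0, 0.0, 5.0])
assert seminorm(x) == 0.0 and np.any(x != 0)   # only a semi-norm: ||x|| = 0 but x != 0

def to_quotient(x):
    # Quotient map V -> W = V / {x : ||x|| = 0}, identified here with R^2
    return x[:2]

u = np.array([1.0, 2.0, 7.0])
v = np.array([3.0, -1.0, -4.0])
# The form factors through W: it depends only on the classes of u and v
assert np.isclose(semi_form(u, v), np.dot(to_quotient(u), to_quotient(v)))
```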
This construction is used in numerous contexts. The Gelfand–Naimark–Segal construction is a particularly important example of the use of this technique. Another example is the representation of semi-definite kernels on arbitrary sets.
Alternatively, one may require that the pairing be a nondegenerate form, meaning that for all non-zero $x$ there exists some $y$ such that $\langle x, y \rangle \neq 0$, though $y$ need not equal $x$; in other words, the induced map to the dual space $V \to V^*$ is an isomorphism. This generalization is important in differential geometry: a manifold whose tangent spaces have an inner product is a Riemannian manifold, while if this is relaxed to a nondegenerate conjugate-symmetric form, the manifold is a pseudo-Riemannian manifold. By Sylvester's law of inertia, just as every inner product is similar to the dot product with positive weights on a set of vectors, every nondegenerate conjugate-symmetric form is similar to the dot product with nonzero weights on a set of vectors, and the numbers of positive and negative weights are called respectively the positive index and negative index.
Purely algebraic statements (ones that do not use positivity) usually only rely on the nondegeneracy (the isomorphism $V \to V^*$) and thus hold more generally.
The term "inner product" is opposed to outer product, which is a slightly more general opposite. Simply, in coordinates, the inner product is the product of a 1×n covector with an n×1 vector, yielding a 1×1 matrix (a scalar), while the outer product is the product of an m×1 vector with a 1×n covector, yielding an m×n matrix. Note that the outer product is defined for different dimensions, while the inner product requires the same dimension. If the dimensions are the same, then the inner product is the trace of the outer product (trace only being properly defined for square matrices).
On an inner product space, or more generally a vector space with a nondegenerate form (so an isomorphism $V \to V^*$), vectors can be sent to covectors (in coordinates, via transpose), so one can take the inner product and outer product of two vectors, not simply of a vector and a covector.
In a quip: "inner is horizontal times vertical and shrinks down, outer is vertical times horizontal and expands out".
More abstractly, the outer product is the bilinear map $W \times V^* \to \operatorname{Hom}(V, W)$ sending a vector and a covector to a rank 1 linear transformation (a simple tensor of type (1,1)), while the inner product is the bilinear evaluation map $V^* \times V \to F$ given by evaluating a covector on a vector; the order of the domain vector spaces here reflects the covector/vector distinction.
The inner product and outer product should not be confused with the interior product and exterior product, which are instead operations on vector fields and differential forms, or more generally on the exterior algebra.
As a further complication, in geometric algebra the inner product and the exterior (Grassmann) product are combined in the geometric product (the Clifford product in a Clifford algebra) – the inner product sends two vectors (1-vectors) to a scalar (a 0-vector), while the exterior product sends two vectors to a bivector (2-vector) – and in this context the exterior product is sometimes called the "outer product". This usage is discouraged, however, and the inner product is more correctly called a scalar product in this context, as the nondegenerate quadratic form in question need not be positive definite (need not be an inner product).